Reviews: LCA: Loss Change Allocation for Neural Network Training
Originality: While many works have studied the properties of the endpoint found by SGD, the literature examining SGD training dynamics in the context of deep neural networks is sparser, and the loss contribution metric appears novel to me. The paper is therefore original in that respect. Quality: The paper is in general of good quality. However, a few specific points could be improved: - It would be nice to characterize the approximation error introduced by the first-order Taylor expansion. - The authors claim that the loss contribution is well grounded while other Fisher information-based metrics depend heavily on the chosen parametrization. Could the authors expand on this point and provide a more detailed comparison between LC and the metrics introduced in [1] and [13]? - In the introduction, the authors claim that entire layers drift in the wrong direction during training.
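The first-order Taylor expansion the review refers to can be sketched numerically: the loss change over one SGD step is approximated as the sum over parameters of gradient times parameter update, which allocates a per-parameter "loss contribution". This is a minimal illustration of the idea; the variable names (`grad`, `theta_old`, `theta_new`) are mine, not the paper's.

```python
import numpy as np

def loss_change_allocation(grad, theta_old, theta_new):
    """First-order allocation of the loss change to each parameter:
    LC_i ≈ grad_i * (theta_new_i - theta_old_i)."""
    return grad * (theta_new - theta_old)

grad = np.array([0.5, -1.0, 2.0])          # gradient at theta_old
theta_old = np.array([1.0, 1.0, 1.0])
theta_new = theta_old - 0.1 * grad         # one SGD step, lr = 0.1

lc = loss_change_allocation(grad, theta_old, theta_new)
print(lc)          # per-parameter contributions (all negative here)
print(lc.sum())    # first-order estimate of the total loss change
```

Under plain SGD every entry is `-lr * grad_i**2 <= 0`, so every parameter "helps"; the interesting cases the paper studies are optimizers and training phases where some entries turn positive (parameters hurting the loss).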
Runtime Freezing: Dynamic Class Loss for Multi-Organ 3D Segmentation
Willoughby, James, Voiculescu, Irina
Segmentation has become a crucial pre-processing step to many refined downstream tasks, and particularly so in the medical domain. Even with recent improvements in segmentation models, many segmentation tasks remain difficult. When multiple organs are segmented simultaneously, difficulties are due not only to the limited availability of labelled data, but also to class imbalance. In this work we propose dynamic class-based loss strategies to mitigate the effects of highly imbalanced training data. We show how our approach improves segmentation performance on a challenging Multi-Class 3D Abdominal Organ dataset.
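One simple way to realize a dynamic class-based loss of this kind is to re-weight each class inversely to its current per-class performance, so that poorly segmented (typically rare) organs contribute more to the loss. The sketch below is an assumption of mine, not the authors' exact scheme; `per_class_dice` and the inverse-Dice weighting are illustrative.

```python
import numpy as np

def dynamic_class_weights(per_class_dice, eps=1e-6):
    """Weight each class inversely to its current Dice score,
    normalized so the weights sum to 1."""
    w = 1.0 / (per_class_dice + eps)
    return w / w.sum()

# Hypothetical per-class Dice scores after an epoch (e.g. liver,
# spleen, pancreas, duct): the rare classes score much lower.
dice = np.array([0.95, 0.90, 0.40, 0.10])
w = dynamic_class_weights(dice)
print(w)   # the worst-performing class receives the largest weight
```

These weights would then multiply the per-class loss terms (e.g. per-class Dice or cross-entropy) at the next epoch, shifting gradient signal toward the hardest organs.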
A Safety-Adapted Loss for Pedestrian Detection in Automated Driving
Lyssenko, Maria, Pimplikar, Piyush, Bieshaar, Maarten, Nozarian, Farzad, Triebel, Rudolph
In safety-critical domains like automated driving (AD), errors by the object detector may endanger pedestrians and other vulnerable road users (VRU). As common evaluation metrics are not an adequate safety indicator, recent works employ approaches to identify safety-critical VRU and back-annotate the risk to the object detector. However, those approaches do not consider the safety factor in the deep neural network (DNN) training process. Thus, state-of-the-art DNN penalizes all misdetections equally irrespective of their criticality. Subsequently, to mitigate the occurrence of critical failure cases, i.e., false negatives, a safety-aware training strategy might be required to enhance the detection performance for critical pedestrians. In this paper, we propose a novel safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training. We exploit the reachability set-based time-to-collision (TTC-RSB) metric from the motion domain along with distance information to account for the worst-case threat quantifying the criticality. Our evaluation results using RetinaNet and FCOS on the nuScenes dataset demonstrate that training the models with our safety-aware loss function mitigates the misdetection of critical pedestrians without sacrificing performance for the general case, i.e., pedestrians outside the safety-critical zone.
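The core mechanism, re-weighting per-pedestrian losses by an estimated criticality score, can be sketched as follows. The linear scaling `1 + alpha * criticality` is my illustrative choice; the paper's exact formulation (built on TTC-RSB and distance) may differ.

```python
import numpy as np

def safety_adapted_loss(per_object_loss, criticality, alpha=2.0):
    """Scale each object's detection loss by (1 + alpha * criticality),
    where criticality in [0, 1] comes from a motion-based risk metric
    such as time-to-collision with reachability sets (TTC-RSB)."""
    return per_object_loss * (1.0 + alpha * criticality)

losses = np.array([0.3, 0.3, 0.3])   # identical base losses
crit   = np.array([0.0, 0.5, 1.0])   # non-critical -> highly critical
weighted = safety_adapted_loss(losses, crit)
print(weighted)   # [0.3, 0.6, 0.9]
```

With `criticality = 0` the loss reduces to the standard one, which matches the stated goal of not sacrificing performance for pedestrians outside the safety-critical zone.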
Focal Loss in Object Detection
So Focal Loss reduces the loss contribution from easy examples and increases the importance of correcting misclassified examples. Let's first understand what Cross-Entropy loss is for binary classification. The idea behind Cross-Entropy loss is to penalize wrong predictions more heavily than it rewards right predictions.